Predicting Anchor Links between Heterogeneous Social Networks
People usually get involved in multiple social networks to enjoy new services
or to fulfill their needs. Many new social networks try to attract users of
other existing networks to increase the number of their users. Once a user
(called source user) of a social network (called source network) joins a new
social network (called target network), a new inter-network link (called anchor
link) is formed between the source and target networks. In this paper, we
concentrate on predicting the formation of such anchor links between
heterogeneous social networks. Unlike conventional link prediction problems in
which the formation of a link between two existing users within a single
network is predicted, in anchor link prediction, the target user is missing and
will be added to the target network once the anchor link is created. To solve
this problem, we use meta-paths as a powerful tool for utilizing heterogeneous
information in both the source and target networks. To this end, we propose an
effective general meta-path-based approach called Connector and Recursive
Meta-Paths (CRMP). By using those two different categories of meta-paths, we
model different aspects of social factors that may affect a source user to join
the target network, resulting in the formation of a new anchor link. Extensive
experiments on real-world heterogeneous social networks demonstrate the
effectiveness of the proposed method against recent methods.

Comment: To be published in "Proceedings of the 2016 IEEE/ACM International
Conference on Advances in Social Networks Analysis and Mining (ASONAM)".
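The abstract above describes scoring candidate anchor links with meta-path-based features. A minimal sketch of one such feature, using hypothetical toy data and a made-up connector meta-path (source-user → friend → anchor → target-user), can be written with adjacency-matrix products; this is an illustration of the general idea, not the CRMP feature set itself:

```python
import numpy as np

def connector_metapath_score(A_s, anchors, A_t, u):
    """Count instances of a connector meta-path starting at source user u:
    how many of u's friends already have a counterpart in the target
    network, plus a one-hop extension inside the target network."""
    # friends of u who are anchored to the target network (vector over target users)
    friend_anchor = A_s[u] @ anchors
    # extend the path one friendship hop inside the target network
    extended = friend_anchor @ A_t
    return int(friend_anchor.sum()), int(extended.sum())

# toy example (hypothetical data): 4 source users, 3 target users
A_s = np.array([[0, 1, 1, 0],        # friendships in the source network
                [1, 0, 0, 1],
                [1, 0, 0, 0],
                [0, 1, 0, 0]])
anchors = np.array([[0, 0, 0],       # user 0: no anchor link yet
                    [1, 0, 0],       # user 1 -> target user 0
                    [0, 1, 0],       # user 2 -> target user 1
                    [0, 0, 0]])
A_t = np.array([[0, 1, 0],           # friendships in the target network
                [1, 0, 1],
                [0, 1, 0]])

print(connector_metapath_score(A_s, anchors, A_t, u=0))  # (2, 3)
```

In a full pipeline, counts like these for several meta-paths would form the feature vector fed to a classifier that predicts whether the anchor link forms.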
ProGAP: Progressive Graph Neural Networks with Differential Privacy Guarantees
Graph Neural Networks (GNNs) have become a popular tool for learning on
graphs, but their widespread use raises privacy concerns as graph data can
contain personal or sensitive information. Differentially private GNN models
have been recently proposed to preserve privacy while still allowing for
effective learning over graph-structured datasets. However, achieving an ideal
balance between accuracy and privacy in GNNs remains challenging due to the
intrinsic structural connectivity of graphs. In this paper, we propose a new
differentially private GNN called ProGAP that uses a progressive training
scheme to improve such accuracy-privacy trade-offs. Combined with the
aggregation perturbation technique to ensure differential privacy, ProGAP
splits a GNN into a sequence of overlapping submodels that are trained
progressively, expanding from the first submodel to the complete model.
Specifically, each submodel is trained over the privately aggregated node
embeddings learned and cached by the previous submodels, leading to an
increased expressive power compared to previous approaches while limiting the
incurred privacy costs. We formally prove that ProGAP ensures edge-level and
node-level privacy guarantees for both training and inference stages, and
evaluate its performance on benchmark graph datasets. Experimental results
demonstrate that ProGAP can achieve up to 5-10% higher accuracy than existing
state-of-the-art differentially private GNNs.
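The progressive scheme described above can be caricatured in a few lines: each stage queries the graph once through a noisy aggregation of the previous stage's cached embeddings, so later stages gain expressive power without re-spending privacy budget on earlier aggregations. This is an illustrative sketch under assumed simplifications (normalization stands in for the trained submodel; real ProGAP trains a network at each stage and calibrates the noise formally):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_aggregate(A, X, sigma):
    """Aggregation perturbation: sum neighbor embeddings, then add
    Gaussian noise to obscure the contribution of any single edge."""
    return A @ X + rng.normal(0.0, sigma, size=X.shape)

def progressive_embeddings(A, X, num_stages, sigma):
    """Hypothetical progressive pipeline: stage 0 uses raw features;
    each later stage aggregates the *cached* output of the previous
    stage, so the graph is queried exactly once per stage."""
    cache = [X]
    for _ in range(num_stages):
        H = noisy_aggregate(A, cache[-1], sigma)
        # stand-in for the submodel trained on H at this stage
        H = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-8)
        cache.append(H)
    return cache

# toy 3-node path graph with identity features
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.eye(3)
stages = progressive_embeddings(A, X, num_stages=2, sigma=0.1)
print(len(stages), stages[-1].shape)  # 3 (3, 3)
```

Because each stage reads only cached, already-privatized embeddings, the total privacy cost grows with the number of stages rather than with training iterations.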
GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation
In this paper, we study the problem of learning Graph Neural Networks (GNNs)
with Differential Privacy (DP). We propose a novel differentially private GNN
based on Aggregation Perturbation (GAP), which adds stochastic noise to the
GNN's aggregation function to statistically obfuscate the presence of a single
edge (edge-level privacy) or a single node and all its adjacent edges
(node-level privacy). Tailored to the specifics of private learning, GAP's new
architecture is composed of three separate modules: (i) the encoder module,
where we learn private node embeddings without relying on the edge
information; (ii) the aggregation module, where we compute noisy aggregated
node embeddings based on the graph structure; and (iii) the classification
module, where we train a neural network on the private aggregations for node
classification without further querying the graph edges. GAP's major advantage
over previous approaches is that it can benefit from multi-hop neighborhood
aggregations, and guarantees both edge-level and node-level DP not only for
training, but also at inference with no additional costs beyond the training's
privacy budget. We analyze GAP's formal privacy guarantees using Rényi DP and
conduct empirical experiments over three real-world graph datasets. We
demonstrate that GAP offers significantly better accuracy-privacy trade-offs
than state-of-the-art DP-GNN approaches and naive MLP-based baselines. Our
code is publicly available at https://github.com/sisaman/GAP
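The aggregation perturbation at the heart of GAP bounds each edge's influence before adding noise. A minimal illustrative sketch (not the released implementation at the URL above): row-normalize the node embeddings so any single edge changes the aggregate by at most 1 in L2 norm, then add Gaussian noise calibrated to that sensitivity:

```python
import numpy as np

rng = np.random.default_rng(1)

def private_aggregation(A, H, noise_std):
    """GAP-style noisy aggregation (illustrative sketch).
    Row-normalizing H caps the L2 contribution of each neighbor at 1,
    so adding/removing one edge shifts a row of the aggregate by at
    most 1; Gaussian noise with std proportional to that sensitivity
    then hides any single edge."""
    H_norm = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)
    agg = A @ H_norm                  # sum of normalized neighbor embeddings
    return agg + rng.normal(0.0, noise_std, size=agg.shape)

# toy star graph, 3 nodes with 4-dimensional embeddings
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
H = rng.normal(size=(3, 4))
Z = private_aggregation(A, H, noise_std=0.5)
print(Z.shape)  # (3, 4)
```

Downstream, a classifier trained on Z never touches the edges again, which is why the privacy cost stops accruing after the aggregation step.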